For a fast adjustment you can use:
p "MyTwoColumnDataFile.dat" u 1:2 w lp ls 7 ps 2 lc 2
That means: use the first and second columns, with lines and points ("w lp"), line style 7 (this style is a line with dots at the places where data points are provided; "ls 5" would be rectangles, etc.), point size 2 (here you can adjust the size of those dots), and line color 2 (set as you wish).
Reinstall the i18n plugin at version 11.1.9 or 11.1.10 rather than the latest, which has many issues; this fixed the same issue for me.
OK, it was a stupid question.
While experimenting with automatically updating binaries, I mistakenly downloaded a cn.pyd into src.
Then I wasted almost a whole day trying this and that to figure out why breakpoints in cn.py did not work.
Finally my colleagues suggested I check cn.__file__ to find out what I had really imported.
Maybe this experience could save someone's time in the future when they read this.
It seems that, at the time of writing, Excel's dependency tracking functionality is only able to track spill arrays as if they were any other array, so it cannot differentiate between a calculation that is element-wise and one that operates on the whole array input argument(s).
The only viable solutions for tasks where one column needs to take outputs from another column as input are:
Use a helper row to hold the true input and then use an external (to Excel's native formulae) means to copy the reference output to the input cell. This could be VBA or, in the worst possible case, a user input with visual feedback if the value is out of date.
Stack multiple (optional) calculations into a single column. So in my real-world example, I may just design for a maximum number of sequential turbine stages and stack those calculations into each column.
Old post, but perhaps this helps somebody:
You could use a top-level bus structure to hold all your bus objects, as seen here:
https://de.mathworks.com/help/simulink/slref/simulink.bus.html
Were you able to solve this issue?
I have a similar issue...
React does indeed handle event delegation internally through its SyntheticEvent system. However, there are still specific scenarios where manual event delegation can be beneficial.
React's Built-in Event Delegation
React automatically delegates most events to the document root, so when you write:
<button onClick={handleClick}>Click me</button>
React doesn't actually attach the listener to that specific button - it uses a single delegated listener at the document level.
When Manual Event Delegation is Still Useful
Despite React's internal delegation, manual event delegation is valuable for:
Performance with large dynamic lists (1000+ items)
Complex nested interactions within list items
Mixed event types on the same container
Integration with non-React libraries
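For example, here is a minimal TypeScript sketch of manual delegation inside a React component (the component, prop names, and data-id attribute are illustrative, not from the question):
import React from "react";

type Item = { id: string; label: string };

export function ItemList({ items }: { items: Item[] }) {
  // One delegated handler on the container instead of one handler per item.
  const handleListClick = (e: React.MouseEvent<HTMLUListElement>) => {
    const li = (e.target as HTMLElement).closest("li[data-id]");
    if (!li) return; // the click landed outside an item
    console.log("clicked item", li.getAttribute("data-id"));
  };

  return (
    <ul onClick={handleListClick}>
      {items.map((item) => (
        <li key={item.id} data-id={item.id}>
          {item.label}
        </li>
      ))}
    </ul>
  );
}
This keeps a single React handler no matter how many items are rendered, which is where the performance benefit for large lists comes from.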
Good hint by @SmartMarkCoder
Option Explicit On

Public Class Form1
    Dim numberOfSides As Integer = 0
    Const skew As Single = Math.PI * 1.5F ' Corrects the rotation

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        Me.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi
        'MessageBox.Show(Me.Width & "," & Me.Height)
    End Sub

    Protected Overrides Sub OnPaint(e As PaintEventArgs)
        Dim centerX As Integer = Me.ClientSize.Width * 2 / 3
        Dim centerY As Integer = Me.ClientSize.Height * 1 / 2
        Dim centerPoint As New PointF(centerX, centerY)
        Dim radius As Integer = Me.ClientSize.Width / 4.5
        '
        MyBase.OnPaint(e)
        If numberOfSides < 3 Then Return
        Dim polygon = CreatePolygon(radius, numberOfSides, centerPoint)
        Using blackPen As New Pen(Color.Black, 2)
            e.Graphics.SmoothingMode = Drawing2D.SmoothingMode.AntiAlias
            e.Graphics.DrawPolygon(blackPen, polygon)
        End Using
        '
        e.Graphics.FillRectangle(Brushes.Red, centerX, centerY, 2, 2) ' added to visualise the center point
    End Sub

    Private Sub ListBox1_SelectedIndexChanged(sender As Object, e As EventArgs) Handles ListBox1.SelectedIndexChanged
        numberOfSides = CInt(ListBox1.SelectedItem)
        Me.Invalidate()
    End Sub

    Public Function CreatePolygon(radius As Single, sides As Integer, center As PointF) As PointF()
        Dim polygon = New PointF(sides - 1) {}
        Dim angle As Single = 360.0F / sides * CSng(Math.PI / 180.0F)
        For side As Integer = 0 To sides - 1
            polygon(side) = New PointF(
                CSng(center.X + radius * Math.Cos(skew + side * angle)),
                CSng(center.Y + radius * Math.Sin(skew + side * angle)))
        Next
        Return polygon
    End Function
End Class
The solution for me was to use https://github.com/BtbN/FFmpeg-Builds with variant gpl-shared, and use that as the layer instead of john van sickle's pure static ffmpeg.

Kindly let me know if this works!
=SUM(SUMIF(H5:H10,FILTER(E5:E11,F5:F11=B5),I5:I10))
Answering here as I suspect the other answer is AI-generated.
The most likely answer is that you don't have the typescript compiler installed. If you're using npm, you can install it to use anywhere with:
npm install --global typescript
Or for short:
npm i -g typescript
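If it installs correctly, you can verify the compiler is available with:
tsc --version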
For those struggling to find the packages affected by 4 KB alignment, follow these steps:
Step 1: Choose your debug APK from the Analyze APK menu.
Step 2: The Analyzer will show the affected .so library files that need to be fixed. (In the picture, a few libraries are marked that are not aligned with the 16 KB page size.)
Copy keywords from the affected .so file names, and in a terminal run the following commands:
cd android
./gradlew app:dependencies
Step 3: Search for the keywords and you will find the affected package that is using C++ native shared libraries. (In my case I searched for droidsonroids.gif and found that it comes from the Crisp Chat package.)
<picture>
<source srcset="diagram.svg" type="image/svg+xml">
<source srcset="diagram.png" type="image/png">
<img src="diagram.gif" width="620" height="540"
alt="Diagram showing the data channels">
</picture>
So, this was a very unexpected problem, quite a goofy one.
In my application I had
@ComponentScan(basePackages = {"path.to.a.package.from.external.module.with.generic.spring.auth.components",
"path.to.a.package.with.local.spring.security.customizations.with.typo"})
Note the part with.typo - I was completely shadowing my custom implementation, including my custom security chain from above.
Interestingly, Spring did not complain about this problem. Maybe this is expected, since the package might exist but provide no components?
I switched to IntelliJ IDEA Ultimate in the meantime and it now flags such packages, but I wonder whether this is a fair tradeoff, given how well a typo like this can hide bugs.
I was having a similar issue, and after updating Visual Studio to the latest version, the issue was resolved.
Thanks iroha, for your answer, and for clarifying my question. You're right, it's a problem with the interactive rendering. (The "clever but unorthodox" approach was developed by a T.A., so I can take no credit.)
Rendering from the console works just fine. I just hate having two separate files: one with the RMD and one with the code for rendering it twice.
I did try all sorts of variations on knit_print, also to no avail.
Using the editor_options in the YAML does seem to work with ciAUC(), but doesn't seem to help with describeBy(). My workaround for describeBy() has been to use the mat = TRUE argument, save it to an object, and then knitr::kable() the object, though I think I'll switch to Tim G's workaround.
You shouldn't have to create a package.json manually; you can run npm install to create it, install the packages you need, and then run npm start.
You can get the very same error message on Linux when /tmp is mounted with noexec. Last week I was installing a new DB on an already existing system and this message appeared for me.
You absolutely can.
Just make sure that you zip the folder and then archive the file. You might have to check the GitLab configuration, since the file size tends to be high.
Could you check your package.json to ensure you have compatible versions?
Or you could also reopen your editor.
Main Problems:
You're checking for params.key?("king") but should check the value
You're using @top_discard, which contains the card object, not the code
The API calls for replacing the king aren't being executed properly
You need to handle the king replacement as a separate operation
Here's the corrected code:
get("/discard") do
# ... [previous code remains the same until the king section] ...
################################################### start of change suit with king
@is_king = false
if @top_discard.fetch("value") == "KING"
@is_king = true
end
# Check if king parameter is present and not empty
@in_king = false
if params.key?("king") && !params["king"].empty?
king_code = params["king"]
@in_king = true
# Remove the current king from discard pile
deck = cookies[:deck_id]
pile_name = "discard"
# Draw (remove) the current king from discard pile
remove_king_url = "https://deckofcardsapi.com/api/deck/" + deck + "/pile/" + pile_name + "/draw/?cards=" + @top_discard["code"]
HTTP.get(remove_king_url)
# Add the new king to discard pile
add_new_king_url = "https://deckofcardsapi.com/api/deck/" + deck + "/pile/" + pile_name + "/add/?cards=" + king_code
HTTP.get(add_new_king_url)
# Refresh the discard pile data
discard_list = "https://deckofcardsapi.com/api/deck/" + deck + "/pile/" + pile_name + "/list/"
@discard_res = api_response(discard_list, "piles").fetch(pile_name).fetch("cards")
# Update top discard card
@top_discard = @discard_res.last
@discard_arr = @discard_res.map { |card| card.fetch("image") }
end
# Only load kings selection if top card is king AND we haven't already chosen one
if @is_king && !@in_king
new_deck = "https://deckofcardsapi.com/api/deck/new/?cards=KS,KC,KH,KD"
resp = HTTP.get(new_deck)
raw_response = resp.to_s
parsed_response = JSON.parse(raw_response)
@kings_deck_id = parsed_response.fetch("deck_id")
king_draw = "https://deckofcardsapi.com/api/deck/" + @kings_deck_id + "/draw/?count=4"
@cards_to_add = api_response(king_draw, "cards")
king_add = []
@cards_to_add.each do |c|
king_add.push(c.fetch("code"))
end
pile_name = "kings"
cards = king_add.join(",")
pile = "https://deckofcardsapi.com/api/deck/" + @kings_deck_id + "/pile/" + pile_name + "/add/?cards=" + cards
resp = HTTP.get(pile)
pile_list = "https://deckofcardsapi.com/api/deck/" + @kings_deck_id + "/pile/" + pile_name + "/list/"
@kings = api_response(pile_list, "piles").fetch("kings").fetch("cards")
@king_arr = []
@king_codes = []
@kings.each do |c|
@king_arr.push(c.fetch("image"))
@king_codes.push(c.fetch("code"))
end
end
erb(:discard)
end
Key Changes:
Fixed parameter checking: use params.key?("king") && !params["king"].empty?
Use card code instead of object: @top_discard["code"] instead of @top_discard
Proper API sequence: Remove old king → Add new king → Refresh data
Conditional king loading: Only load king options if needed and not already processed
In your ERB template, make sure you have:
<% if @is_king && !@in_king %>
  <form action="/discard" method="get">
    <h3>Choose a suit for the King:</h3>
    <% @kings.each do |king| %>
      <label>
        <input type="radio" name="king" value="<%= king['code'] %>">
        <img src="<%= king['image'] %>" height="100">
      </label>
    <% end %>
    <button type="submit">Change Suit</button>
  </form>
<% end %>
Had the same issue. Turns out if you push the phone brightness to 100%, it works perfectly.
Thanks to @FerhatMousavi, I found a solution. I also changed the enter() function to no longer accept any input except ENTER. (The changes are all found in my original question, because I didn't realize the Answer button was below the "related questions" section. Honestly, it would make more sense to place it before the "related questions" section, as in: if your question and replies aren't helping, here are some related questions you might want to check. That's why I didn't notice the button.)
Also, @HolyBlackCat, I can't check-mark my own answer for 2 days, so... how do I mark this as solved?
After some more research I found a way to access the getter by importing the store and going that way. Maybe not the correct way but it will do until we move to Pinia.
import { store } from '../..'
[GET_MERGED_ISSUES]: (state) => (position) => {
...
let positionIssue = store.getters[GET_POSITION_ISSUE](position)
...
}
There's not much point in seeing the C++ classes in the editor; you can't edit them from the UE editor, and it would open Visual Studio anyway if you wanted to make changes.
It's better to just open the project in VS. If you are installing it just now, or haven't done this step yet, you will need these plugins so everything works without a hitch.
Then you build the Development Editor configuration in VS; after that you can open the UE editor and you should be able to use what you made in C++.
app('queue')->connection('redis');
Add this in your AppServiceProvider
public function boot(): void
{
    // Force Redis queue connection resolution early to avoid
    // 'Call to undefined method Illuminate\Queue\RedisQueue::readyNow()' error in Horizon,
    // especially in multi-tenant context.
    app('queue')->connection('redis');

    // your existing code...
}
With the new ::details-content pseudo-element, we no longer need hacks to force <details> blocks open in print. We can simply reveal the hidden content using CSS:
@media print {
  ::details-content {
    content-visibility: visible;
    height: auto !important;
  }
}
Use JWT auth with an httpOnly secure cookie; it is the most secure type of session. Setting the token manually is not secure, because it can be read by JS. You don't need to manage the token on the client: just set credentials: 'include' in all your requests and set the correct domain in cors.php. If you can, always use HTTPS.
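For example, a minimal client-side sketch of that setup (the endpoint URL is made up; the JWT lives in an httpOnly cookie set by the server, so the client never touches it):
async function fetchProfile() {
  const response = await fetch("https://api.example.com/me", {
    credentials: "include", // send the httpOnly session cookie cross-origin
    headers: { Accept: "application/json" },
  });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}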
useActionState is only available as stable in React 19.
Add this line to the gradle.properties
org.gradle.jvmargs=-Xms1024m -Xmx4096m
Not sure if this is still relevant, but it’s worth noting you can now delete a Firestore database straight from the Firebase console under Firestore Database → [your DB] → Delete
ik you see this skysorcerer >:)
I was able to fix it by following the code of sample app of google mobile ads flutter sdk.
https://github.com/googleads/googleads-mobile-flutter/tree/main/samples/admob/banner_example
Yes, my website is also showing this error: the site is live but receives a 403. How do I fix it? Site URL: https://fwab.co.uk/
TEMPLATES = [{
    "BACKEND": "django.template.backends.django.DjangoTemplates",
    "DIRS": [],
    "APP_DIRS": True,  # <-- must be True
    "OPTIONS": {"context_processors": [
        "django.template.context_processors.request",
        "django.contrib.auth.context_processors.auth",
        "django.contrib.messages.context_processors.messages",
    ]},
}]
courses/templates/registration/login.html
path("accounts/login/",
auth_views.LoginView.as_view(template_name="registration/login.html"),
name="login")
When the files are only staged and you want to unstage, just use git reset for that as @neuroine answered:
git reset /path/to/file
But if you have created just one commit and now want to soft-reset it, git reset --soft won't work, as it will say:
$ git reset --soft HEAD~
fatal: ambiguous argument 'HEAD~': unknown
revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
In that case, we can use update-ref:
git update-ref -d HEAD
I'm keeping this extra detail here even though it was not directly asked in the question, because it is a similar problem and people who stumble upon this can find the solution here.
As most of the trading on the RI marketplace is done by bots and algos, doing it manually will never give you good data, nor can you easily sell the RIs you buy. You need to consider market depth and liquidity, but you only have good access to that data when you have a lot of different customers already under management.
So instead of figuring out the CLI yourself, use a discount automation tool (like hykell.com) for it. You free up your time and get more savings compared to doing it yourself.
I need to select an audio device and then play the audio on that device, which is not the main one, in order to do a pre-listen with two audio cards.
Run this command and your problem will be solved.
composer update livewire/livewire livewire/flux
Hi, could you please ask your question in the Hudi Slack channel or raise a Hudi issue?
I came across a similar issue when upgrading from Shibboleth 4 to 5 as well. The Attribute Resolver was just completely ignoring my custom data connector with no error message. The change I had to make was calling super.doParse in my BaseAttributeDefinitionParser. Shibboleth 4 was able to automatically pick up the custom schema without this, but Shibboleth 5 requires the super method to be called.
There is some more information here: https://shibboleth.atlassian.net/wiki/spaces/IDP5/pages/3199512485/Developing+Attribute+Resolver+Extensions
Are the tasks running in private subnets?
If yes, set assignPublicIp: false and ensure a NAT Gateway for outbound.
Also, confirm WP_HOME / WP_SITEURL envs (or DB values) match the ALB DNS. Wrong hostnames often cause 301/302.
setImageBytesData(localStorage.getItem("imageBytes").split(','));
This works for me.
https://dev.to/chamupathi_mendis_cdd19da/integrate-ms-clarity-to-nextjs-app-app-router--241o
If you have any questions, please ask : )
If you are just compiling C++ code: I was having the same issue, where the debugger would not stop at the breakpoint. For me, the problem was that I had set C/C++ optimization to the maximum in the project properties; after disabling it, the debugger works as it is supposed to.
Just a reminder for myself: in v3
you can put this in the provider component
export const system = createSystem(defaultConfig, {
preflight: false,
});
and pass it to the ChakraProvider as value
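A minimal sketch of that wiring (the import paths and the Provider component name are illustrative, based on the usual Chakra UI v3 setup):
import { ChakraProvider, createSystem, defaultConfig } from "@chakra-ui/react";
import type { ReactNode } from "react";

const system = createSystem(defaultConfig, {
  preflight: false,
});

export function Provider({ children }: { children: ReactNode }) {
  return <ChakraProvider value={system}>{children}</ChakraProvider>;
}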
hello This's for test !
pip config debug can show you the config file you're using.
My config file is at /Users/**/.config/pip/pip.conf with index-url.
env_var:
env:
global:
/Library/Application Support/pip/pip.conf, exists: False
site:
/Users/**/anaconda3/envs/python/pip.conf, exists: False
user:
/Users/**/.pip/pip.conf, exists: False
/Users/**/.config/pip/pip.conf, exists: True
global.index-url: https://pypi.org/simple
Try git rebase --continue and then check whether it works. It would be better if you provided some snapshots.
If you have a list of points, consider using a Catmull-Rom spline. It is used for pathing through a series of points in what appears to be a "natural" manner. Developed for use in computer graphics, it relies on discrete mathematics. Some examples, though, mention going to "infinity, and beyond!"
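As a rough illustration, here is a small TypeScript sketch of uniform Catmull-Rom interpolation (type and function names are made up for the example):
type Point = { x: number; y: number };

// Uniform Catmull-Rom: interpolate between p1 and p2 for t in [0, 1],
// using p0 and p3 as the neighbouring control points.
function catmullRom(p0: Point, p1: Point, p2: Point, p3: Point, t: number): Point {
  const t2 = t * t;
  const t3 = t2 * t;
  const blend = (a: number, b: number, c: number, d: number) =>
    0.5 * (2 * b + (-a + c) * t + (2 * a - 5 * b + 4 * c - d) * t2 + (-a + 3 * b - 3 * c + d) * t3);
  return { x: blend(p0.x, p1.x, p2.x, p3.x), y: blend(p0.y, p1.y, p2.y, p3.y) };
}

// Walk a whole list of points by sampling each interior segment.
function samplePath(points: Point[], samplesPerSegment = 10): Point[] {
  const out: Point[] = [];
  for (let i = 0; i + 3 < points.length; i++) {
    for (let s = 0; s <= samplesPerSegment; s++) {
      out.push(catmullRom(points[i], points[i + 1], points[i + 2], points[i + 3], s / samplesPerSegment));
    }
  }
  return out;
}
The curve passes through every interior control point, which is why it looks like a natural path through the data.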
I'm facing the exact same error, and one important observation I made is that we both use Kakao auth.
When I run npm ls query-string, it seems the two frameworks require two different versions of query-string:
[I] J4S0N :: $ npm ls query-string
[email protected] /Users/chaejunseongjason/Desktop/_/DEP_PROJECTS/PF_FOLDER/ParkField
├─┬ @react-native-kakao/[email protected]
│ └── [email protected]
└─┬ @react-navigation/[email protected]
└─┬ @react-navigation/[email protected]
└── [email protected]
# Remove the old X-Frame-Options header
proxy_hide_header X-Frame-Options;
# Add a Content-Security Policy header to allow embeds from your url
add_header Content-Security-Policy "frame-ancestors 'self' your url;";
Another solution is using router.dismissTo:
router.dismissTo({
  pathname: 'xxx',
  params: {
    value: 'xxx'
  }
});
I'm also having this problem now. Have you found a solution yet?
std::mem::drop(listener) will unbind without closing connections.
Changing my table to this seems to have fixed the issue.
CREATE TABLE IF NOT EXISTS PMProjectServiceEquipment (
    CompanyID INT NOT NULL,
    ProjectID INT NOT NULL,
    ServiceEquipmentID INT NOT NULL,
    PRIMARY KEY(CompanyID, ProjectID, ServiceEquipmentID)
);
I removed the auto_increment ID field, which I believe was causing the issue because it cannot be set in an insert statement.
A composite primary key of CompanyID, ProjectID, ServiceEquipmentID worked for this instance of my table.
See the README.md in my GitHub repo for Step-By-Step Instructions on How to Create a MFE App with Angular Host and Angular Remotes Using Nx. It uses Angular 20 and Nx 21.
Read my article on Medium for step-by-step instructions, on How To Create a MFE App with Angular Host and React Remote.
MFE Angular Host with React Remote using Nx
Here is the source code on my GitHub for MFE Example App with Angular Host and React Remote using Nx. It uses Angular 20, React 19, and Nx 21.
The submit() method submits data directly rather than passing it through your "submit" event listener. Just change the .submit() to .requestSubmit() and this should route it through the event listener.
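For example (the form id and handler are illustrative):
const form = document.querySelector<HTMLFormElement>("#checkout-form");

form?.addEventListener("submit", (event) => {
  event.preventDefault();
  // ... custom submit handling here ...
});

// form.submit();        // bypasses the listener above (and validation)
form?.requestSubmit();   // fires the "submit" event, so the listener runs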
Try removing everything from your page, test with an empty body, then add your sections back in, testing incrementally. That way you can concentrate on that section with the problem.
I had a below-the-fold problem, ran the JS code through AI, and resolved it.
I solved it. The instructions from Gemini and the Firebase Console (Hosting) omitted the step of adding the domain www.timeatonart.com (which is redirected to timeatonart.com).
I had the same problem from Access VBA. When I renamed my Python file from *.py to *.txt, the Python code did what was expected.
I think the speed problem is due to the difference between logical locations and physical locations. When you logically mount your Google Drive in Google Colab, the physical location of the files is very much not on Google Colab.
I tried to find some code I wrote to deal with this, but I couldn't find it.
Caveat: I dealt with the problems described below approximately 12 months ago, so there is a small chance that some things have changed.
My perspective: I'm not a programmer, but I can code in Python. I was a sys/net-admin, teacher, MCSE, "webmaster"--prior to 2005.
Because I cannot cite documentation of my claim, I will describe my problem and solution as proof of my claim. If you believe my claim, you can probably skim or skip this section.
My problem: I had up to 80 GB of (WAV) files that were physically in up to six different Google Drive accounts. With my sym-fu skills, I could effectively mount all six Google Drives at the same time in one Colab session. Obviously, Colab did not transfer that data to the physical server on which my Colab session was running.
Let's say I had a Python command to concatenate 30 files into one new file: newFile.wav = concat(listPathFilenames). Those 30 files were physically spread across six different Google Drives. The Python interpreter would request the files from the OS (Colab), and the OS would use filesystem-level operations to move the physical files to the Colab server. Just waiting for 600 MB of files to transfer could take 30 seconds, but the operation would only take 2-5 seconds. (I wasn't really concatenating, you know?)
So, at least for a little while, my solution was to "operate" on the files before I actually needed to operate on them. My flow allowed me to easily predict which files I would soon need, so I had logic that would do something like
for pathFilename in listPathFilenames:
    pathlib.Path(pathFilename).stat()
I had to try out a few different functions to find the right one. I didn't want to modify the file, and some functions wouldn't force the physical transfer of the file: for example, I think .exists() didn't work. The net effect was that the physical location of the files would be on the Colab server, and when I did the real operations on the files, there would not be a delay as the files were retrieved from Google Drive.
First, I don't have enough knowledge of pip to understand the answer from https://stackoverflow.com/users/14208544/hor-golzari, so I would still incorporate his guidance. (Well, I mean, since you seem to understand it, you should use his knowledge.)
From what I can tell, Colab uses multiple excellent tactics to speed up on-the-fly environment creation. Off the top of my head:
The git command, to any destination, is prioritized at the network level. In contrast, the filesystem-level transfers to and from Google Drive are absolutely not prioritized. One way I know that for sure: if you "write" a (large) file to Google Drive and the Colab environment says "the file has been written," then even a catastrophic failure in your Colab will not prevent the file from reaching Google Drive. How? It's buffered. It's not fast (some files take 15 minutes before I can see them on Google Drive), but it is reliable.
Therefore, I suspect Google Drive won't accomplish what you want, simply because Colab treats the physical connection to Google Drive as too slow to be worth prioritizing.
I'm trying to optimize my Google Colab workflow
I don't know what needs optimizing, but some things I've done (that I can recall off the top of my head):
pip doesn't need to think. The following used to be my template for quickly installing stuff. I still used "requirements.txt" files at the time. I've switched to pyproject.toml, and I guess I would probably use something like pip install {repoTarget}@git+https://{accessTokenGithub}@github.com/{repoOwner}/{repoTarget}.git. idk.
import sys
import subprocess
import pathlib

listPackages = ['Z0Z_tools']

def cloneRepo(repoTarget: str, repoOwner: str = 'hunterhogan') -> None:
    if not pathlib.Path(repoTarget).exists():
        accessTokenGithub = userdata.get('github_token')
        subprocess.run(["git", "clone", f"https://{accessTokenGithub}@github.com/{repoOwner}/{repoTarget}.git"], check=True)
    pathFilenameRequirements = pathlib.Path(repoTarget) / 'requirements.txt'
    if pathFilenameRequirements.exists():
        listPackages.append(f"-r {pathFilenameRequirements}")
    sys.path.append(repoTarget)

if 'google.colab' in sys.modules:
    from google.colab import drive, userdata
    drive.mount('/content/drive')
    cloneRepo('stubFileNotFound')
    cloneRepo('astToolFactory')

%pip install -q {' '.join(listPackages)}
Switch to token-based authentication using Capacitor Preferences. This approach:
Works consistently across all platforms
Gives you full control over token lifecycle
Avoids WebView cookie limitations
Is the standard for mobile apps
The cookie persistence configuration might work, but token-based auth is more reliable for mobile applications.
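A minimal sketch of that token flow with the Capacitor Preferences plugin (the login endpoint and key name are made up; the Preferences calls follow the @capacitor/preferences API):
import { Preferences } from "@capacitor/preferences";

const TOKEN_KEY = "auth_token"; // illustrative key name

// Save the token after a successful login (endpoint is hypothetical).
export async function login(email: string, password: string): Promise<void> {
  const res = await fetch("https://api.example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email, password }),
  });
  const { token } = await res.json();
  await Preferences.set({ key: TOKEN_KEY, value: token });
}

// Attach the stored token to later requests.
export async function authFetch(url: string): Promise<Response> {
  const { value } = await Preferences.get({ key: TOKEN_KEY });
  return fetch(url, { headers: value ? { Authorization: `Bearer ${value}` } : {} });
}

export async function logout(): Promise<void> {
  await Preferences.remove({ key: TOKEN_KEY });
}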
I believe the problem occurs because the application does not handle different DPI (Dots Per Inch) settings correctly.
The hardcoded values for centerPoint and radius are not adapted to the DPI scaling of the second computer, which leads to a changed size and position of the polygon.
You should dynamically create centerPoint and radius.
The Sep/Sept bug strikes again in 2025. The weird thing on the CLDR website is that most of their examples are taken from the month of September, and the page quoted above abbreviates them to Sep.
bird-brawl
I
Create a similar game like flappy bird
Lovable
3:47 AM on Sep 01
Thought for 11 seconds
I'll create a vibrant Flappy Bird-style game with smooth animations and engaging gameplay!
Game Vision:
Colorful, cheerful design with bright sky blues and vibrant greens
Smooth bird physics with satisfying flap mechanics
Animated pipe obstacles and scrolling background
Clean score display and game over screen
Responsive controls (click or spacebar)
Features for first version:
Physics-based bird movement with gravity
Pipe obstacles with collision detection
Score system and high score tracking
Game over and restart functionality
Smooth animations and particle effects
Design Elements:
Sky blue gradient background
Bright yellow bird character
Green pipe obstacles
Clean typography and UI
Satisfying hover and click animations
Let me implement this with a beautiful design system and engaging gameplay:
Some scopes in the Photos Library API were recently removed. This includes photoslibrary.readonly. It now only allows access to photos that were generated by your app.
Try Photos Picker API with the scope photospicker.mediaitems.readonly to access your photos in Google Photos.
Compare the two APIs here.
What about just this:
function getItems<T extends "circle" | "square">(type: T) {
  return shapes.filter(s => s.type == type) as ((Circle | Square) & { type: T })[];
}
We can also use {% raw %} and {% endraw %} to ignore some files containing Jinja2 syntax. For example:
Assume we have a file called test.py that has some Jinja2 syntax, we can do this:
{% raw %}
# here is our test.py file
a = 2
{{ some syntax that can conflict with cookiecutter }}
b = 5
{% endraw %}
It is working; the problem is that the SVG does not have a solid stroke, it has a double stroke.
Try this and see what happens:
style="stroke:red;stroke-width:1"
I am also tackling this; you can check out https://github.com/StraReal/Cryptic to see if I have solved it yet.
I found the issue: in the run configs, the selected module whose classpath is used to run the application was set to aetherian-tools-and-ores-template-1.21.1.main, when it should be set to com.thangs3d.aetheriantoolsandores.aetherian-tools-and-ores-template-1.21.1.main.
I haven't tested this, but I am interested in using the AUX functionality as well. The BMI270 datasheet mentions on page 162 setting PWR_CTRL.AUX_EN (register 0x7D) to 0b1. I don't see this in your code; perhaps the AUX interface is not enabled correctly.
Since the writer idiomatically owns the channel, and the reader reads until the channel is closed, it doesn't matter if you send a few more records. The Done branch eventually gets selected, returning control to the outer function, which, hopefully, has some kind of deferred close on the channel. Once the reader empties the channel and the channel is closed, the reader exits.
Maybe try AMD's Quark? It can convert fp32 and fp16 to bf16.
If you are facing this in Flutter and you are using awesomenotifications, follow this link https://github.com/flutter/flutter/issues/159519.
I suggest doing it using NetBeans.
STEP 1:
File -> New Project -> Select Java With Maven -> Select Web Application
Name and Location: You give the project name (this becomes your context path by default)
Settings > Choose Server: choose the server you want to use to run your servlets (Apache Tomcat, GlassFish, etc...)
Click Finish
Now NetBeans generates a basic webapp for you.
STEP 2:
Then, you need to add the servlet :
Right-Click on your project name -> New -> Servlet
Enter Class Name
URL Pattern: /"yourURLPattern"
Click Finish
NetBeans should create the class and register it via @WebServlet or web.xml.
Regarding the body of the servlet: since Tomcat 10+ migrated from javax.servlet to jakarta.servlet, if your code uses javax.*, pick Tomcat 9 or GlassFish. But if you want to keep using Tomcat 10/11, change the imports from javax.* to jakarta.*.
STEP 3:
Right-click Project name → Run.
NetBeans will start the server, deploy the app, and open your browser.
Your servlet URL will be:
http://localhost:8080/yourProjectName/URLPattern
If it opens your browser on a port different from 8080, you have to change the URL accordingly.
See https://stackoverflow.com/a/79751971/4386338 (the same as the answer above, I think). For me the issue was the phase in which it was running the report, "prepare-package"; I had to change it to "test" to fix it.
If you’re working with an R script that contains multiple user-defined functions and you’d like to see how they interact, you might find my CRAN package funcMapper useful.
It analyses a given R script and produces an interactive visNetwork graph showing the relationships between the functions defined in that script.
The function's parameters are shown below:
funcMapper(script_path, output_name, output_path, source = FALSE, cleanup_temp_file = TRUE)
For example:
funcMapper("main_script", "map_of_main_script", "~/test")
This generates an interactive map (map_of_main_script.html) illustrating how the functions in main_script.R call each other.
CRAN link: https://cran.r-project.org/web/packages/funcMapper/index.html
Github link: https://github.com/antoniof1704/funcMapper
Whenever this happens, I open the code using Notepad++ and remove this 'NULL' (highlighted in the attached image). It always solves the problem.
It sounds like you don't know where your problem is:
you might have incorrect register settings, or just register settings that aren't suitable for your use-case
you might have a bug in your interface driver, which appears to be bit-banging with GPIO rather than a hardware camera interface
you might be doing everything right with the camera, but your image processing algorithm doesn't produce the results you want.
You have said that you suspect (1) but have posted code to suggest you might want help with (2). Personally I think (3) is most likely.
I suspect this because just taking the MSB of the Y channel and discarding everything else is a very naive process, and I would not usually expect this to work. Effectively you are thresholding at 50%. What makes you think 50% is the correct threshold? Maybe you need to not throw all that data away and do some more context-aware image processing, like using a different threshold in different parts of the image, or maybe you need a further step after thresholding, such as morphological operations.
I would approach the problem like this: get all the data out of your interface driver and look at it as a full colour image. If it looks ok as a photo, then probably (1) and (2) are good and you need to think about (3).
If the image is corrupt then you need to decide between (1) and (2). To do this I would first set the register values back to the default values straight from the manufacturer. If this fixes it the problem was (1), if it is still corrupt then (2).
Obviously this isn't a full solution to the problem, but you need to narrow it down before you dive deep.
Almost as an aside, one quick thing that springs to mind: are those discontinuous blocks separated at a fixed spacing? It looks like maybe 8 pixels. If the problem is (1), then YUV422 breaks the image into blocks and discards some of the chroma data; maybe YUV444 or even RGB would give better results. If the problem is (2), then a fixed block size like that might mean you are assembling your bits into bytes incorrectly.
I've managed to figure out how to answer my own query. Having everyone's input was incredibly helpful in teaching me about certain aspects of Python and how classes interact with dictionaries. All of your answers helped massively to guide me to my solution, so I am very grateful for the contributions from: "Neil Butcher", "Mark Tolonen" and "Marce Puente".
Here is how I managed to get the code to use the dictionary's values for each pokemon and also the data stored within the variables/instances of the class to find and then replace Pokemon's evolutions if they need updating.
Firstly, I needed to change the dictionary so that, instead of using each Pokemon's Pokedex number as the key, it just uses their name as the key, to help the for loop I use later on with its comparisons.
pokemon_database = {
    "bulbasaur": bulbasaur,
    "ivysaur": ivysaur,
    "venusaur": venusaur,
    "charmander": charmander,
    "charmeleon": charmeleon,
    "charizard": charizard,
    "squirtle": squirtle,
    "wartortle": wartortle,
    "blastoise": blastoise,
    "caterpie": caterpie,
    "metapod": metapod,
    "butterfree": butterfree,
    "weedle": weedle,
    "kakuna": kakuna,
    "beedrill": beedrill
}
Then, after much trial and error with different versions of the for loop, I stumbled into creating this loop. As far as I can tell, it loops through the dictionary using the values of the stored items, rather than the keys, and checks whether each variable's stored "evolution" data is a string or not. If it is a string, it replaces it with the corresponding variable for the evolution it has found.
def update_evolutions(pokemon_database):
    for pkmon in pokemon_database.values():
        if pkmon.evolution:
            pkmon.evolution = pokemon_database[pkmon.evolution]
            updated_pokemon_list.append(pkmon.evolution.name)

update_evolutions(pokemon_database)
Although not part of the actual loop's purpose, the "updated_pokemon_list" line adds each Pokemon that was found to need updating to a list, so I could check which Pokemon the loop had updated, just to see what it is doing.
I then added some code before and after the loop to make sure it was doing what I intended it to do, and I'll include that in its entirety for transparency.
# Debugging test - See which Pokemon were updated
updated_pokemon_list: list = []

print(f"Bulbasaur evolves into {bulbasaur.evolution}, then it evolves eventually into {ivysaur.evolution}.")  # Wouldn't let me add a further ".name" after each evolution since it caused an error, showing it was still stored as a string.

# Automatically update any Pokemon's evolutions to link to the correct variable
def update_evolutions(pokemon_database):
    for pkmon in pokemon_database.values():
        if pkmon.evolution:
            pkmon.evolution = pokemon_database[pkmon.evolution]
            updated_pokemon_list.append(pkmon.evolution.name)

update_evolutions(pokemon_database)

print(bulbasaur.evolution.name)
print(f"Bulbasaur evolves into {bulbasaur.evolution.name}, then it evolves eventually into {ivysaur.evolution.name}.")
print(updated_pokemon_list)
The output of this block was:
Bulbasaur evolves into ivysaur, then it evolves eventually into venusaur.
Ivysaur
Bulbasaur evolves into Ivysaur, then it evolves eventually into Venusaur.
['Ivysaur', 'Venusaur', 'Charmeleon', 'Charizard', 'Wartortle', 'Blastoise', 'Metapod', 'Butterfree', 'Kakuna', 'Beedrill']
Again, thank you for all your help.
Hello, the solution sounds great. Unfortunately it doesn't work with Android 11: MediaStore does not save the photo into the local application folder from Application.Context.GetExternalFilesDir.
Any new suggestion for Android 11 would be great.
Thanks a lot.
import 'package:url_launcher/url_launcher.dart';

void openWhatsApp(String phone, String message) async {
  final url = "https://wa.me/$phone?text=${Uri.encodeComponent(message)}";
  if (await canLaunch(url)) {
    await launch(url);
  } else {
    throw 'Could not launch $url';
  }
}
The solution to your problem is very simple.
Just assign a value to your name variable.
```ts
let name: string = "";
```
const name = (function() {
  switch (index) {
    case 0:
      return "cat";
    case 1:
      return "dog";
    default:
      return "idk";
  }
})()
**More on JS IIFE:** https://stackoverflow.com/a/8228308/21962459
You’ve bumped into a hard limitation of WidgetKit. According to Apple’s official documentation:
“Interactions with a toggle or button always guarantee a timeline reload.”
That means every AppIntent you attach to a widget will always trigger getTimeline() afterwards. As far as I know there is no supported way to mark an AppIntent as “side-effect only” or skip the reload. This is by design: WidgetKit treats widget controls as state-changing and therefore refreshes the timeline to ensure the widget reflects the latest state.
Alternative approaches like Link(destination:) don’t cause a timeline refresh, but they necessarily open the app — which doesn’t meet your requirement of sending analytics directly from the widget.
You can install a prebuilt binary for node-pty by installing this node package.
Requirement: Node.js 18 LTS
npm install node-pty-prebuilt-multiarch
After installing, replace
import pty from 'node-pty';
With
import pty from 'node-pty-prebuilt-multiarch';
I am using Next.js too. Were you able to limit your RAM? I have 8 GB of RAM. I switched from Windows 11 (which used around 4 GB of RAM in an idle state) to Ubuntu 22.04 (which uses 1.8 GB of RAM in idle). Running npm run dev consumes the rest of the RAM I have. Later, looking at the system monitor in Ubuntu, I saw around 3 GB of swap memory also being used. I thought Linux would help me, but that does not seem to be the case.
I tried multiple React GA4 libraries to use on my e-commerce website, and all of them provide only a thin wrapper. That's why I built a library, @connectaryal/google-analytics: a type-safe, developer-friendly GA4 wrapper for React and Next.js.
OK, I was a little bit wrong: when I put one more row with a horizontal layout containing one button into the same vertical layout, that button is also shifted left, so the whole vertical layout is shifted left, but I don't know why.
I came across a similar kind of issue.
In my case the problem was in my command, where I was providing the port after '-p', which is assigned to the password. For the port, '-P' should be a capital letter, not lowercase.
So I replaced the command "mysql -h your-endpoint.rds.amazonaws.com -p your-port -u your-username -p" with "mysql -h your-endpoint.rds.amazonaws.com -P your-port -u your-username -p" and it worked fine. Case sensitivity was the main issue.
So after posting here I found the answer.
In my home directory, I had a file .curlrc, which contained
-w="\n"
This affects curl's output. After removing it, everything works as expected.
In Tailwind v4 you can’t use just `@import "tailwindcss";`.
Replace it in `globals.css` with:
@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";
Also remove the const config = {...} part from CSS (that belongs in config files).
Then keep only import './globals.css' in app/layout.js → your global styles will work.
I'm getting the same error and I've tried as many ways as possible... Here's my code:
class EdamamService:
    def __init__(self):
        self.api_key = settings.EDAMAM_API_KEY
        self.app_id = settings.EDAMAM_API_ID
        self.edamam_account_user = settings.EDAMAM_ACCOUNT_USER

        # --------------DEBUGGING PURPOSES----------------
        print(f"DEBUG: Loading App ID: '{self.app_id}'")
        print(f"DEBUG: Loading API Key: '{self.api_key}'")

        self.client = httpx.AsyncClient(
            base_url=settings.EDAMAM_API_URL,
            params={
                "beta": True,
                "app_key": self.api_key,
                "type": ["public"],
            }
        )

    async def get_meal_planner(self, data: MealPlanRequest) -> MealPlanResponse:
        """
        Get meal plan from Edamam API with comprehensive error handling
        """
        header = {
            'accept': 'application/json',
            'Content-Type': 'application/json',
            'Edamam-Account-User': self.edamam_account_user
        }
        try:
            # Convert Pydantic model to JSON
            request_data_dict = data.model_dump(mode='json', by_alias=True)
            response = await self.client.post(
                url=f"api/meal-planner/v1/{self.app_id}/select",
                json=request_data_dict,
                headers=header,
            )
            # Handle HTTP status errors
            response.raise_for_status()
            # Parse response JSON
            response_data = response.json()
" For some reason, it gives me still the 401 error, even if I use my email in edamam account user, even I create more api keys and try with them... I dont know what to do :/
Check the Over.fig Chrome extension. It allows comparing a semi-transparent overlay above the live site.
I solved the problem by moving both routes to web.php.
It turned out all I had to do to fix this was update the e2b-code-interpreter package.