Still not working; it shows this error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/tmp/ipython-input-3913369503.py in <cell line: 0>()
3 # Retrieved 2025-11-10, License - CC BY-SA 4.0
4
----> 5 from paddleocr import PaddleOCR, draw_ocr
6 from PIL import Image
7 from IPython import display
ImportError: cannot import name 'draw_ocr' from 'paddleocr' (/usr/local/lib/python3.12/dist-packages/paddleocr/__init__.py)
---------------------------------------------------------------------------
For the RabbitMQ server, you can find this file at the following path: C:\Windows\System32\config\systemprofile\AppData\Roaming\RabbitMQ\.erlang.cookie
In my case, the content of the file is HQYDPYUYZQFES******
For your user account (CLI client), you can find this file at the following path: C:\Users\<your_username>\.erlang.cookie
In my case, the content of the file is OGIHLSSKESAFW******
After that, you need to synchronize the contents of these files. Run Windows PowerShell as Administrator with the following script:
$serverCookiePaths = @(
    "$env:APPDATA\RabbitMQ\.erlang.cookie",
    "$env:WINDIR\system32\config\systemprofile\AppData\Roaming\RabbitMQ\.erlang.cookie"
)
$userCookiePath = "$env:USERPROFILE\.erlang.cookie"
foreach ($path in $serverCookiePaths) {
    if (Test-Path $path) {
        Copy-Item $path $userCookiePath -Force
        Write-Host ".erlang.cookie content synchronized."
        break
    }
}
rabbitmq-service.bat stop
rabbitmq-service.bat start
rabbitmqctl.bat status
rabbitmq-plugins.bat enable rabbitmq_management
What is a "syslog" in Windows context?
To achieve persistent anchoring of 3D models in Vuforia after the image target is recognised, you need to transition from image-based tracking to world-based tracking. The key is to use Vuforia's Anchor system. When your image target is first detected, you can create a new `AnchorBehaviour` at the target's position in the world.
This anchor becomes a fixed point in the real world, calculated by Vuforia's internal understanding of the environment. Your 3D models should then be made children of this anchor object. Once this parent-child relationship is established, the models will remain fixed in the virtual world space, independent of the original image target's visibility. The models will now stay in place as the user moves the device, allowing for free exploration of the environment around the anchored content. This approach effectively decouples the models from the image tracker, using the device's spatial awareness to maintain their position.
I know the interface doesn't make it clear but "General Advice/Other" is for questions that seek opinionated advice, not for factual questions like yours. You should probably delete this post and post it with the correct question type as changing the type is currently not supported.
Maybe this is helpful: How to plot normal vectors in each point of the curve with a given length?
sequelize.literal(`'black' = ANY("tag")`)
What worked for me was uninstalling Spyder, and then re-installing via the command from the official Anaconda docs:
conda install anaconda::spyder
It now works like a charm
def fibonacci_cache(n, cache = {0 : 0, 1 : 1}):
    if n in cache:
        return cache[n]
    else:
        cache[n] = fibonacci_cache(n - 1, cache) + fibonacci_cache(n - 2, cache)
        return cache[n]
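For example, calling it for a couple of values is a quick sanity check (assuming the function above is defined as-is):

# Quick check of the memoized Fibonacci above
print(fibonacci_cache(10))   # 55
print(fibonacci_cache(50))   # 12586269025, returned quickly thanks to the shared cache

Note that the default cache dict is a mutable default argument, so it is shared across calls; that is exactly what makes repeated calls fast here.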
I think you might want to repost this as a normal question, not an open-ended discussion. The Stack Overflow UI is confusing right now; you have to select "debugging" as the question type when submitting a question, otherwise it becomes this weird new type of question.
If you’re encountering Error 153 when trying to load a YouTube iframe or embedded player, it’s because YouTube now requires a valid Referer header to identify the embedding client.
According to YouTube’s updated policy, you must include a Referer parameter when making requests to the embedded player.
See the official documentation here:
🔗 YouTube Embedded Player API Client Identity
To fix this, you can explicitly set the Referer header in your request, like so:
..loadRequest(
    Uri.parse("https://www.youtube.com/embed/videoID"),
    headers: {
        // 🔑 These two lines allow YouTube's referer verification to pass
        "Referer": "strict-origin-when-cross-origin",
        // "Origin": "https://www.youtube-nocookie.com",
    },
)
I would highly recommend if you’re using the ElevenLabs Agent SDK, try combining it with Twilio’s Stream API and a lightweight VAD module (e.g. py-webrtcvad or DeepFilterNet). This allows you to preprocess the incoming audio stream, detect actual user intent, and prevent the Agent from falsely triggering when background noise or other voices are detected. Another option is to use ElevenLabs’ “continuous listening mode” (if available) with a minimum interruption threshold set to a higher level — this ensures the Agent doesn’t stop mid-sentence unless it’s confident that the user is actually responding.
When using a ref, don't forget to set the style property and include the unit (px, %, em). It should look like this:
refToMove.current.style.transform = `translateY(${-x}px)`;
Is there any practical difference in using $Collection -notcontains $Item instead of $Item -notin $Collection?
(And the corresponding positive variants, of course.)
Since you need the sitemap, I guess SEO matters to you. In that case, I think an SPA is not a good solution. If you don't want to go server-side, you can try an SSG (static site generation) solution such as Next.js, Gatsby, or Remix. Since you are using react-router-dom, Remix may be easier for you. Most React SSG solutions can automatically generate a sitemap during the build process.
The issue is caused because Bull automatically attempts to process your job multiple times, and sometimes it might exceed the limit.
FIX: update your code like this
const connection = new IORedis({maxRetriesPerRequest:null})
As per the official Pylint extension (version 2025.2) for VS Code 1.105, you will need to add an argument to Pylint's Args list. You can do so in the UI preferences or their corresponding JSON setting as in the following examples (where I have other arguments already):
Likewise to add more modules, append to the same argument:
--generated-members=torch.* cv2.* etc.
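For reference, here is a minimal sketch of the corresponding settings.json entry (assuming the ms-python.pylint extension; the module list is just an example):

"pylint.args": [
    "--generated-members=torch.*,cv2.*"
]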
I have it downloaded. Before you install it, you need to install one more tool: the Dynamics 365 SDK. I have the Dynamics 365 SDK here in my downloads. When you try to install it, it might show you an error about protected mode; go to "More info", it will show the information, and click "Run anyway", then click Yes so that this package installs.
Start by installing the SDK first; after that, navigate to the install option of your Developer Toolkit, so you have both of these files downloaded.
Then click OK to accept the Microsoft software license terms, click Continue, and select a folder where you want to extract the application. It will take a few minutes to extract; once it is extracted, you can install it.
Why not jump to B directly and remove fragment C? Then when you remove B, it will go back to fragment A.
Use Android Studio's profiler to profile your app.
If by "does not work here" you mean Stackoverflow's snippets, then that is due to the restrictions SO has on cross origin and frame based actions. If the code works, it will be runnable from your own server.
@Rani: Oh, how stupid of me! I'm more of a beginner, but this shouldn't have happened, initializing lastcontrol every time. I only focused on the IF/ELSE, not the event itself.
During my everyday work with Jakarta Server Faces I test as much as possible just below the UI (subcutaneous tests), meaning without Arquillian Graphene/Drone/Selenium, by simply calling the backing bean's method in a usual Arquillian integration test.
In this case the scope doesn't matter for the test. Hence simply override it via @Specializes or @Alternative. I rather recommend the latter, because a specialized bean's parent needs to be part of the Arquillian @Deployment, leading to unnecessary extra code.
This way you use official Jakarta and Arquillian framework tooling.
@David: it's a toggle switch, a binary button. IF runs if the button is true and ELSE runs if the button is false again. But since this handles the same button, I need the last used control's name in both IF and ELSE.
You can use the command : omz update
Open apps/accounts/apps.py (similar for your other apps like ads).
You need to update the name attribute in the AccountsConfig class to match the full dotted path:
from django.apps import AppConfig
class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'apps.accounts'  # Change here
Do the same for apps/ads/apps.py:
from django.apps import AppConfig
class AdsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'apps.ads'  # Change here
After making these changes, run python manage.py makemigrations again.
It should work; if not, please let me know.
WinForms.
I suppose for the others the code would look different. I couldn't find a decent example on the internet yet. I'm wondering how VS is going to interpret enum code as a combo box.
I am trying to register my app through SoundCloud for Developers and got redirected to a Google Form. Is it safe?
$query
    ->groupBy('c.id')
    ->having('COUNT(DISTINCT f.feature_id) = :featuresCount')
    ->andHaving('SUM(CASE WHEN f.feature_id IN (:features) THEN 1 ELSE 0 END) = :featuresCount')
    ->setParameter('featuresCount', count($features), ParameterType::INTEGER);
If you want precise cutting without re-encoding, try using -ss and -to with -c copy like this: ffmpeg -ss 00:01:00 -to 00:02:00 -i input.mp4 -c copy output.mp4. For a step-by-step guide and other FFmpeg tricks, check out my site: vedoapk.com
I've not worked on or seen DataStage, but would it be possible to reverse engineer the transformations by comparing the source & result? PySpark does not have any such built-in methods AFAIK.
I tried this one in testcaferc.cjs, and then it no longer shows the message "Look for and connect to any device on your local network":
disableNativeAutomation: true,
I previously created a tool to enable the use of .ani files on the web. It's extremely easy to use and basically achieves an effect similar to using .ani files locally in web environments. Although there might be bugs or limitations caused by the rendering strategies of some browsers, you can give it a try: https://github.com/qingzhengQB/ani-cursor.js
After upgrading our application environment to Oracle WebLogic Server 12.2.1.4.0, we started encountering JSP compilation errors that were not present in the previous WebLogic version. The issue occurs during JSP compilation or at runtime when the server attempts to load a JSP page.
weblogic.servlet.jsp.CompilationException: Failed to compile JSP /xyz.jsp
[jspService() method exceeds 65535 bytes limit]
Environment Details:
WebLogic Server Version: 12.2.1.4.0
JDK Version: 1.8.0 (64-bit)
Operating System: Linux
Java VM: HotSpot 64-Bit Server VM
JSP Compiler: Eclipse JDT
Post-upgrade, the JSP compiler is throwing a jspService() method size limit error. The same JSPs compiled successfully on the previous WebLogic version. This issue typically occurs when the generated servlet code for a JSP exceeds the Java method size limit (64 KB). It may also be influenced by differences in JSP compilation behavior between WebLogic’s built-in JSP compiler and the Eclipse JDT compiler introduced in newer versions.
Verified that the JDK and WebLogic versions are compatible.
Enabled the Eclipse JDT compiler by setting the following system property in the startup script:
-Dorg.apache.jasper.compiler.useEclipseCompiler=true
Ensured all required JSP-related JARs (e.g., com.oracle.weblogic.jsp.jar) are present in the classpath.
Despite these configurations, the issue persists with large JSPs containing complex scriptlets or embedded Java code.
Has anyone encountered a similar issue after upgrading to WebLogic 12.2.1.4.0?
After trying to fix this for two nights, I tried checking the service accounts of the project through Firebase's Users and Permissions section.
It took me to a different Cloud project. It seems the cloud project I was given access to by the owner of the projects wasn't the one being used by the Firebase project.
Create two new fields: one to record the date and time of the change ("date and time changed"), and a lookup field for the user, so we can see who changed it. A tip for you here: when you create a lookup field, put "ID" on the end of the name, so that when developers see the field they'll know it is a lookup; in code terms it is an entity reference, which usually means it has the GUID and the entity type. Putting "ID" on the end lets you know it is a lookup field, which is good practice. Add those custom fields, then save and publish, so they should now be on the account form. Next, we're going to create a workflow that is triggered when the primary contact has a value set on it (not when the record is assigned); it will record the time this happened and the user who did it.
Pyxl was the main culprit slowing down the process a lot; replacing it with fastexcel was very effective, and ditching pandas was absolutely worth it.
I'm late to the game. I recently came across this issue where my Laravel app, when it was initially built, used the increments() method. However, Laravel now uses the id() method.
None of the answers provided here pointed out the main difference between the two.
increments() uses an unsigned INTEGER column type and id() uses an unsigned BIGINT column type in the database. INTEGER and BIGINT differ in how much they can store, but the other key thing is that when you create other tables with foreign keys pointing to id columns, the column types have to match.
The idea is to have a text only interface. I'm using ncurses to display info, etc.
Thanks a lot Dean and Ollie, I am able to achieve 3-4 secs for 700k records. I wouldn't have been able to do it without your guidance. I will work on optimising it more. Thanks a tonne once again.
I just did some more debugging, and apparently creating a FileWriter wiped the text file. I took it away and now my code works.
The issue for me was the PNG format; it had a transparent layer.
I simply converted it to JPG format and it was sent for review.
Just to let others know if they are facing the same issue: if you face this issue in Colab, you can try Kaggle instead.
Generally, you can find the different versions of WiX on its official GitHub page via the link below:
Prerequisites
Install WiX Toolset
Download from: https://wixtoolset.org/releases/
Install "WiX Toolset Visual Studio Extension" if using Visual Studio
Your WPF Project should be built and ready
What @deceze said is a good approach. Instead of updating the counter value every 10 seconds, just store the startDateTime somewhere (e.g., in .env or config.json). Then use this snippet:
<script>
  const startDateTime = new Date('2025-11-10T00:00:00Z'); // example start time
  const now = new Date();
  const diffInSeconds = Math.floor((now - startDateTime) / 1000);
  const counterValue = Math.floor(diffInSeconds / 10);
  console.log(counterValue);
</script>
I ran into this exact issue with webscraper.io — the "next button" pagination is a pain when you have hundreds of pages.
I ended up switching to BrowserAct because it handles this automatically with natural language prompts. You basically tell it "loop through all pages by clicking next" and it does it without needing to manually map each page.
Here's a working example using their Reddit Scraper template — it automatically loops through Reddit posts (which use infinite scroll/next button navigation) and extracts all the data. Same logic works for any pagination.
I am also facing the same issue with my report, and I checked that all the filters in Google Analytics & Power BI are the same. Still, the total does not match.
This issue mainly occurs when mocks set up in the test cases are not actually used; you need to remove the unnecessary mocks.
@Robin Hossain, have you found a solution to this problem?
In my environment, there is rpds.cpython-310-x86_64-linux-gnu.so in the rpds directory.
It's built for Python 3.10, but my environment was Python 3.12.
I changed the runtime to 3.10 and then it worked.
It works and lets me download if I try on JSFiddle (My browser blocks downloads from iframes)
Hi, I had this exact error a few minutes ago... It all disappeared when I created a virtual environment, activated it, installed dbt-core and the dbt-postgres adapter, then ran my dbt command using the activated virtual environment.
Also, check your Python version; older versions of dbt seem to have problems with some Python versions: https://docs.getdbt.com/faqs/Core/install-python-compatibility
just like:
bytes_string = b'3\x00\x02\x05\x15\x13GO\xff\xff\xff\xff\xff\xff\xff\xff'
num_string = bytes_string.hex()
print(num_string)
# 330002051513474fffffffffffffffff
# num = int(num_string, 16)  # parse the hex string as an integer
Here are some good resources focused on Python application development that will be useful:
1. Create GUI Application with Python and Qt6 by Martin Fitzpatrick.
2. Mastering GUI Programming with Python by Alan D. Moore.
3. Learning Python Application Development by Ninad Sathaye.
4. Core Python Application Programming by Wesley J. Chun.
5. Hands-On Enterprise Application Development with Python by Riaz Ahmed
Hope this will help.
Try adding these in your Runner.entitlements after <dict>
<key>aps-environment</key>
<string>development</string>
Still doesn't work? Make sure you complete the 3rd step
https://firebase.flutter.dev/docs/messaging/apple-integration/
This is what I use:
/* (A Name Here if you want ;3) */
var code = 'Hello, World!';
alert(code);
/* (An optional Ending sentence.) */
If you don't want a name or ending then just make them both /**/.
And if you want to commentanize it, then just remove the second forward slash:
/**/ <-- This one! :3
/**/
So this would be a comment:
/** <-- '/' missing!
alert(1+2);
/**/
You can also replace the last /**/ with //*/ if you want ;\
I like this a lot because it is togglable by 1 (one) character, and it also looks pretty nice imo. Yeah! ;3
We are here to provide you the best solution regarding your query. CloudBik offers tenant-to-tenant migration in very easy steps.
Advantages: users can also migrate by themselves; CloudBik provides proper guidance to the user.
CloudBik is a very secure tool.
It provides migration with "0" downtime and no data loss.
Hello and welcome
I’m inviting you to join CS50's Introduction to Computer Science with Python
The exclusive group where great minds come together to learn, share ideas, and grow together. It's a space for engaging discussions, valuable insights, and real connections with like-minded people. Don't miss out! Click the link below to join and introduce yourself once you're in! https://chat.whatsapp.com/JXWEtGWuHLvKeGrTUSORkP?mode=ems_copy_t
Thanks for the reply! I'm adding a cost using the AddCost API. But specifically, there are L1NormCost, L2NormCost, QuadraticCost, and some other cost classes in Drake. I'm trying to find out how to construct a cost for my problem and impose it using AddCost.
This would be possible with a macOS "Installer Plug-In"; however, it will not be an easy process.
Installer Plug-Ins allow you to create custom actions that are shown during the install process, as an additional step (such as after the "Read Me" or "License" step). However, in recent years Apple has not provided any documentation regarding the creation of custom installer plug-ins, likely because of the possible security risks they could expose by running arbitrary code. This means that while they are still fully supported as of macOS Tahoe, development has long-since stopped on them, and they could very well be removed in the future.
You can find examples on GitHub such as this registration code installer plug-in, but the common theme among any examples you come across will likely be how dated they are. As a result of this, the sample code is in Objective-C using Storyboards. You could possibly write the configuration data to a .plist file somewhere on disk, and then retrieve it later from your installed application. It may be possible to migrate this code to Swift, but this would require additional effort on your part.
I would recommend following the Installer Plug-In tutorial by Stéphane Sudre (the individual behind the incredibly useful Packages app). The resource was last updated in 2012, however almost nothing has changed about installer plug-ins since this guide was written.
You could technically prompt for user input via osascript in a pre/postinstall script, however this would likely result in an even worse end-user experience and could lead to many issues.
Following up, I just need more clarification: was there ever an EntityTypeConfiguration<T> base class? Whatever happened to it? Analogous to a Fluent NHibernate ClassMap<T>, for example. No biggie I guess, but it would be interesting to have such an enriched base class experience.
I'm a newbie.
When I use the Robinson (54030) projection for a world map I face the same problem: polygons are closing themselves from one side to the other.
Second, the scale in km/miles shows as invalid when using the Robinson/Wagner VII projection. Please help me figure out how to rectify this. I have tried many ways, but it hasn't been solved.
Thanks
There is an issue coming up: when I installed react-native-reanimated, it appeared. What should I do about this?
The PermissionError: [Errno 13] Permission denied usually means your Python code (or sub-agent) doesn’t have the proper rights or file path access to write to the target file.
Here are the most common causes and fixes:
Make sure your sub-agent is writing to a path it actually has access to.
file_path = "/path/to/output.txt"
with open(file_path, "w") as f:
    f.write("Hello World!")
If this path is inside a restricted system folder (e.g., /root, C:\Program Files, etc.), you’ll get Permission denied.
Fix:
Use a user-writable path like /tmp, ./data/, or os.getcwd()
Example:
import os
file_path = os.path.join(os.getcwd(), "output.txt")
with open(file_path, "w") as f:
    f.write("Works fine!")
Check the data types inferred by Glue when creating the table.
If the “Gender” column was inferred as a string, Glue DQ may treat blanks as valid values.
You can manually adjust the schema in the Glue Catalog or apply a schema mapping transform to ensure null handling works properly.
It's not a feature of the VS Code terminal, but of PowerShell itself.
You can run Set-PSReadLineOption -PredictionSource None in PowerShell and predictive IntelliSense will turn off.
BTW, this question is off topic; you should visit Super User on Stack Exchange.
When a Next.js Server Action receives a 401 Unauthorized response from a service like Google Cloud IAP, Next.js's underlying fetch mechanism may not automatically throw an error in the client-side code when used with Server Actions, leading to the observed silent failure and undefined result [1, 2]. This behavior is a known characteristic of how Next.js handles certain server action responses, especially in specific deployment configurations.
Here is a breakdown of why this happens and recommended approaches to handle session expiration:
Why Doesn't Next.js Throw an Error?
The primary reason for the silent failure lies in how Next.js handles the response from the server action's underlying network request:
Server Actions use Fetch: Server actions in Next.js utilize the fetch API under the hood [2].
Next.js Response Handling: Next.js intercepts the response for Server Actions. If the response is a 401, the framework might be processing it in a way that prevents it from bubbling up as a standard JavaScript error that can be caught by the client-side try/catch block [2]. Instead of an error, the result variable is simply undefined.
IAP's Role: The IAP intercepts the request and returns a 401 response before the request even reaches your server action logic. The browser receives the 401, but the Next.js client-side runtime interprets this in a non-error-throwing manner for this specific interaction [1].
How to Detect the Failure on the Client Side
Since the try/catch block fails to catch the error, you need to implement explicit checks within your client component or the server action itself:
1. Check for undefined result in the Client Component
The simplest way is to check if the result is undefined and handle it as an unauthorized state. This approach works because in the broken scenario, the result is always undefined [1].
'use client';
import { myServerAction } from './actions';
export default function MyComponent() {
  const handleClick = async () => {
    try {
      const result = await myServerAction();
      // Explicitly check for an undefined result
      if (result === undefined) {
        console.error('Session expired or unauthorized');
        // Trigger a re-authentication flow or display a message
        return;
      }
      console.log('Result:', result);
    } catch (err) {
      console.error('Caught error:', err);
    }
  };
  // ...
}
2. Implement a Redirect or Session Check in the Server Action
You can add logic within your server action to manually check the session or authentication status and return a specific, informative object.
'use server';
export async function myServerAction() {
  // Check auth status here before any main logic
  const isAuthenticated = checkSessionStatus(); // Replace with actual session check
  if (!isAuthenticated) {
    // Return a specific error object
    return { success: false, message: 'Unauthorized or session expired' };
  }
  // Some logic here
  return { success: true, message: 'Hello from server' };
}
Then, on the client, check the returned object's properties:
// Client side
const result = await myServerAction();
if (!result.success) {
  console.error(result.message);
  // Handle unauthorized state
}
Recommended Approach for Handling Session Expiration with IAP
The most robust approach involves a combination of client-side detection and a mechanism to force re-authentication:
Use Client-Side Redirection: The standard IAP flow expects a browser redirect to the Google login page when a 401/403 occurs. However, Server Actions use XHR/fetch requests, which don't automatically trigger a browser-level navigation.
Explicitly Force Re-authentication:
When the client-side code detects an undefined result (as shown in method 1 above), it should assume the session is invalid.
The best user experience is to then force a full page reload or navigate the user to a known protected URL to trigger the IAP login flow.
// Client side
if (result === undefined) {
  console.log('Session expired, redirecting to login...');
  // Navigating to the current page will trigger IAP's redirect
  window.location.reload();
}
Consider a Custom Fetch Wrapper (Advanced): If you find yourself needing a more generic solution across many server actions, you could create a custom utility function that wraps the server action call with enhanced error handling. However, the first two methods are usually sufficient and less complex.
By explicitly checking the result of the server action for undefined on the client side, you can reliably detect IAP's 401 responses and implement the necessary re-authentication flow.
For Material UI 7 you add the colors under colorSchemes in the theme
https://mui.com/material-ui/customization/palette/#color-schemes
const theme = createTheme({
  colorSchemes: {
    light: {
      palette: {
        primary: {
          main: '#FF5733',
        },
      },
    },
    dark: {
      palette: {
        primary: {
          main: '#E0C2FF',
        },
      },
    },
  },
});
Web tracking standards/APIs:
🔹 Google Analytics Measurement Protocol – main standard for sending tracking data.
🔹 Google Tag Manager (GTM) – tool for managing tracking tags.
🔹 Conversion APIs – server-side tracking used by Facebook, Google Ads, TikTok, etc.
In the identity server project, you need to register the client application. This is done in the application's seed. You can add an additional registration apart from the one that comes by default.
On the client side, you should reference the identity server as you’re already doing; however, I noticed that some configurations are not entirely correct.
To enable the client application to obtain the current user’s information, an extra step may be required — adding the scopes in both the identity server and the client — and in the client, adding a claim mapping so that the user information is displayed correctly when you use the current user.
This is happening because you're plotting the entire graph.
If you plot only the first 8 seconds, you'll probably get the result you want.
I found the solution myself.
The answer provided in the following related question solved my problem: Autodesk Platform Services - ACC - Get Roles IDs
I just discovered that I can copy my files from my old phone to my laptop, then, with my new phone connected to Android Studio, I can drag/drop the file from Windows File Explorer directly into the Android file explorer!
Problem solved.
Then you'll need to modify your app to request all-files access (https://developer.android.com/training/data-storage/manage-all-files). Officially, the Play Store will only grant it for specific apps, which is why I believe it's simpler to just modify your app to accept shared files instead of jumping through the hoops (unless it's not a published app).
If I understand your reply correctly, it doesn't address what I'm trying to do. I don't have multiple apps trying to share a file.
On my previous phone, some of my apps created/read/updated app-specific files. Those files were located in "Internal Storage" (not in any subfolder of Internal Storage). As a result, those files were accessible from both my phone's file manager and my PC's file manager (when connected to my phone) if I needed to copy/delete/edit them from outside my apps.
It's my understanding that, when I move my apps to the new phone, the apps (which still need to use the info in those files) can only access files that are in "/data/user/0/myApp/files". So I need to copy my files from "Internal Storage" on my previous phone to "/data/user/0/myApp/files" on my new phone.
I guess my first question should be: Is there a way for my apps on my new phone to access files in "Internal Storage"? If so, then I could simply copy the files over to my new phone. But if my apps can't access "Internal Storage", then how can I copy my files into "/data/user/0/myApp/files" on my new phone so my apps can access them?
Does this clarify my question?
@Wicket - I appreciate your replies.
With that said, I can't think of a way to reduce scope on either of the two issues and still have the Add-On do what it is supposed to do.
Reading from Sheets. Seems like I need the readonly for Sheets to get my data. I can still use their picker to pick the spreadsheet, but to read the data, I'll need sheets readonly.
For your comment on the slides currentonly scope: I love this idea and I want to implement it so users won't be as nervous about the Add-On. However, I cannot think of any way to put pie shapes with varying angles into a Slides presentation with that scope. There is no way to do it with the API; I tried a number of things and researched here and elsewhere. I finally realized I could do it with a public template and was really happy about my idea working... and now I'm realizing that won't work because of openById, even though it's not the user's.
I think I'll have to appeal to the Google team and see what they say. They told me to post here first and from what I'm seeing there aren't any ways around it. I need to have my app do less for narrower scopes or appeal for my original scope request.
There are two ways to do it:
1. Rebuild like this: NIXOS_LABEL="somelabel" nixos-rebuild switch
2. Configure system.nixos.label and optionally system.nixos.tags in configuration.nix (see the links for full info)
When you use both at the same time, the first one takes priority.
Important: labels don't support all types of characters; spaces won't work.
It's better to install all significant dependencies explicitly. If you want a better way to manage similar dependencies across subpackages, you could use pnpm's catalogs feature.
extension DurationExt on Duration {
  String format() {
    return [
      inHours,
      inMinutes.remainder(60),
      inSeconds.remainder(60),
    ].map((e) => e.toString().padLeft(2, '0')).join(':');
  }
}
I know this was a while ago, but I have a lead for you, as I think I've just fixed this issue at my end (same scenario: GitHub Codespaces with Snowflake, using SSO).
I changed this setting in VS Code from "hybrid" to "process" (note that "process" reports as being the default):
remote.autoForwardPortsSource
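In settings.json that corresponds to something like the line below (the setting name is what I see in my VS Code; adjust if your version differs):

"remote.autoForwardPortsSource": "process"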
I was looking for the same thing, and I managed to re-implement Ziggy Routes, and I removed Wayfinder since it's still in beta and I don't know exactly how to use it...
I created a repository, but I forgot the name because I have several. I'll find it and send it to you by email: [email protected]
Or contact me on GitHub: github.com/casimirorocha
Off topic. NB Please lay off the boldface. It doesn't help.
A frame can be applied to the Menu like this:
Menu("Options") {
Button("Option 1") {
}
Button("Option 2") {
}
}
.frame(width: 50)
The output will be as below. (Please ignore the button).
🐞 Problem: Cucumber report generation fails
When trying to generate a report using maven-cucumber-reporting, the following message appears:
net.masterthought.cucumber.ValidationException: No report file was added!
📌 Probable cause
This message means the plugin did not find any valid JSON file to generate the report from. The cause is usually one of the following:
- Cucumber tests were not executed before the verify phase
- The file target/cucumber.json was not created because the tests failed or were absent
- An incorrect or missing path in the pom.xml configuration
✅ Suggested solutions
1. Run the tests before generating the report
mvn clean test
mvn verify
> Make sure mvn test produces the cucumber.json file in the target folder.
2. Check that the JSON file exists
After running the tests, confirm the file is there:
ls target/cucumber.json
3. Correct @CucumberOptions configuration
@CucumberOptions(
    features = "src/test/resources/features",
    glue = {"steps"},
    plugin = {"pretty", "json:target/cucumber.json"},
    monochrome = true,
    publish = true
)
4. Correct pom.xml configuration
<plugin>
    <groupId>net.masterthought</groupId>
    <artifactId>maven-cucumber-reporting</artifactId>
    <version>5.7.1</version>
    <executions>
        <execution>
            <id>execution</id>
            <phase>verify</phase>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <projectName>cucumber-gbpf-graphql</projectName>
                <skip>false</skip>
                <outputDirectory>${project.build.directory}</outputDirectory>
                <inputDirectory>${project.build.directory}</inputDirectory>
                <jsonFiles>
                    <param>/*.json</param>
                </jsonFiles>
                <checkBuildResult>false</checkBuildResult>
            </configuration>
        </execution>
    </executions>
</plugin>
🧪 Manual test (optional)
File reportOutputDirectory = new File("target");
List<String> jsonFiles = Arrays.asList("target/cucumber.json");
Configuration config = new Configuration(reportOutputDirectory, "Project Name");
ReportBuilder reportBuilder = new ReportBuilder(jsonFiles, config);
reportBuilder.generateReports();
🧠 Additional notes
- Make sure the .feature files exist and are actually executed
- Check that the test classes use @RunWith(Cucumber.class) or @Cucumber, depending on the JUnit version
- Use mvn clean test verify as a single command to guarantee the correct order
> 💬 If the problem persists, check the execution logs (target/surefire-reports) or enable debug output in Maven for deeper details.
To expand on previous answers, you can get a nice re-usable Group component similar to the one in Mantine like this:
import { View, ViewProps } from "react-native";
export function Group(props: ViewProps) {
  // Spread props first so the row layout and any caller-supplied style both apply
  return <View {...props} style={[{ flexDirection: "row" }, props.style]} />;
}
Assuming your project is using TypeScript, the ViewProps usage above allows passing through any other props and preserves type hints.
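For instance, a usage sketch (the Toolbar component and the import path are just illustrative):

import { Text } from "react-native";
import { Group } from "./Group"; // wherever the component above lives

export function Toolbar() {
  return (
    <Group style={{ justifyContent: "space-between", padding: 8 }}>
      <Text>Left</Text>
      <Text>Right</Text>
    </Group>
  );
}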
Yes. It was impossible to switch directly. So, I made a working switch. Here's the fix:
A post on retrocomputing gives more details. Note that LOADALL apparently couldn't do it, but was wrongly rumored to be able to:
Pm32 -> pm16 -> real 16 (wrap function caller) -> real 16 ( the function call) -> pm32 (resume 32) -> ret to original caller.
uint16_t result = call_real_mode_function(add16_ref, 104, 201); // argc automatically calculated
print_args16(&args16_start);
terminal_write_uint("\nThe result of the real mode call is: ", result);
uint16_t result2 = call_real_mode_function(complex_operation, 104, 201, 305, 43); // argc automatically calculated
print_args16(&args16_start);
terminal_write_uint("\nThe result of the real mode call is: ", result2);
// Macro wrapper: automatically counts number of arguments
#define call_real_mode_function(...) \
    call_real_mode_function_with_argc(PP_NARG(__VA_ARGS__), __VA_ARGS__)

// Internal function: explicit argc
uint16_t call_real_mode_function_with_argc(uint32_t argc, ...) {
    bool optional = false;
    if (optional) {
        // This is done later anyway. But might as well for now
        GDT_ROOT gdt_root = get_gdt_root();
        args16_start.gdt_root = gdt_root;
        uint32_t esp_value;
        __asm__ volatile("mov %%esp, %0" : "=r"(esp_value));
        args16_start.esp = esp_value;
    }
    va_list args;
    va_start(args, argc);
    uint32_t func = va_arg(args, uint32_t);
    struct realmode_address rm_address = get_realmode_function_address((func_ptr_t)func);
    args16_start.func = rm_address.func_address;
    args16_start.func_cs = rm_address.func_cs;
    args16_start.argc = argc - 1;
    for (uint32_t i = 0; i < argc; i++) {
        args16_start.func_args[i] = va_arg(args, uint32_t); // read promoted uint32_t
    }
    va_end(args);
    return pm32_to_pm16();
}
GDT16_DESCRIPTOR:
dw GDT_END - GDT_START - 1 ;limit/size
dd GDT_START ; base
GDT_START:
dq 0x0
dq 0x0
dq 0x00009A000000FFFF ; code
dq 0x000093000000FFFF ; data
GDT_END:
section .text.pm32_to_pm16
pm32_to_pm16:
mov eax, 0xdeadfac1
; Save 32-bit registers and flags
pushad
pushfd
push ds
push es
push fs
push gs
; Save the stack pointer in the first 1mb (first 64kb in fact)
; So its accessible in 16 bit, and can be restored on the way back to 32 bit
sgdt [args16_start + GDT_ROOT_OFFSET]
mov [args16_start + ESP_OFFSET], esp ;
mov ax, ss
mov [args16_start + SS_OFFSET], ax ;
mov esp, 0 ; in case i can't change esp in 16 bit mode later. Don't want the high bit to fuck us over
mov ebp, 0 ; in case i can't change esp in 16 bit mode later. Don't want the high bit to fuck us over
cli
lgdt [GDT16_DESCRIPTOR]
jmp far 0x10:pm16_to_real16
/* Reference version (purely for comparison) */
__attribute__((section(".text.realmode_functions"))) int16_t complex_operation(uint16_t a, uint16_t b, uint16_t c, uint16_t d) {
return 2 * a + b - c + 3 * d;
}
/* Reference version (purely for comparison) */
__attribute__((section(".text.realmode_functions"))) uint16_t add16_ref(uint16_t a, uint16_t b) {
return 2 * a + b;
}
resume32:
; Restore segment registers
mov esp, [args16_start + ESP_OFFSET]
mov ax, [args16_start + SS_OFFSET]
mov ss, ax
mov ss, ax
pop gs
pop fs
pop es
pop ds
; Restore general-purpose registers and flags
popfd
popad
; Retrieve result
movzx eax, word [args16_start + RET1_OFFSET]
; mov eax, 15
ret
The struct is located in the first 64 KB of memory, to allow multi-segment data passing.
typedef struct __attribute__((packed)) Args16 {
    GDT_ROOT gdt_root;
    // uint16_t pad; // (padded due to esp wanting to)
    uint16_t ss;
    uint32_t esp;
    uint16_t ret1;
    uint16_t ret2;
    uint16_t func;
    uint16_t func_cs;
    uint16_t argc;
    uint16_t func_args[13];
} Args16;
To see a simpler version of this:
commit message: "we are so fucking back. Nicegaga"
Commit date: Nov 7, 3:51 am
(gaga typo accidentally typed)
hash: 309ca54630270c81fa6e7a66bc93
and a more modern and cleaned-up version (the one with the code shown above):
commit message: Changed readme.
commit date: Sun Nov 9 18:09:16
commit hash: a2058ca7e3f99e92ea7c76909cc3f7846674dc83
====
Hm, I'm not seeing the used key parts/key length being what they should be. It looks like it is using the sports id as a filter but not actually as part of the index lookup. Can you please include the output of show create table Bet; and show create table BetSelection;?
Just in case someone searches for the answer like me:
As @jhasse said, it is really easy with Clang. All you need to do is build Clang with OpenMP runtime support, so a working set of build commands would be like
git clone https://github.com/llvm/llvm-project
cd llvm-project
mkdir build
cd build
cmake -DLLVM_ENABLE_PROJECTS="clang;lld" -DLLVM_ENABLE_RUNTIMES="openmp;compiler-rt" -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_USE_SANITIZER="" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local/llvm -G "Unix Makefiles" ../llvm
in the build directory inside the llvm-project one. Then you can also run the install step. Or you can really dive into a separate OpenMP build.
See also here (there is a tool called Archer, included in the build, that was mentioned by @Simone Atzeni).
onclick="location.href='About:blank';"
*pseudo code
If both positive:
ABS(a-b) or MAX(a,b) - MIN(a,b)
If both negative:
ABS(ABS(a) - ABS(b)) or MAX(a,b) - MIN(a,b)
If one positive and one negative:
MAX(a,b) - MIN(a,b)
Hence for all situations:
MAX(a,b) - MIN(a,b)
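A quick Python check of that identity (purely illustrative; the above is pseudo code):

# max minus min equals the absolute difference for every sign combination
for a, b in [(5, 2), (-5, -2), (5, -2), (-5, 2)]:
    assert max(a, b) - min(a, b) == abs(a - b)
print("MAX(a,b) - MIN(a,b) matches ABS(a-b) in all cases")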
I don't know why you have an error, but you have an issue with your URL:
there is a missing & between response_type and scope:
response_type=code&scope=user_profile,user_media
Maybe it is only a typo.
I figured it out. The search API string for the next call simply needs to be appended with "&nextPageToken=" and then the token, and keeping all of the same search criteria and returned fields.
STEP 1: FIND THE DIRT
Start data cleaning by determining what is wrong with your data.
Look for the following:
Are there rows with empty values? Entire columns with no data? Which data is missing and why?
How is data distributed? Remember, visualizations are your friends. Plot outliers. Check distributions to see which groups or ranges are more heavily represented in your dataset.
Keep an eye out for the weird: are there impossible values? Like “date of birth: male”, “address: -1234”.
Is your data consistent? Why are the same product names written in uppercase and other times in camelCase?
STEP 2: SCRUB THE DIRT
Missing Data
Outliers
Contaminated Data
Inconsistent Data: You have to expect inconsistency in your data.
Especially when there is a higher possibility of human error (e.g. when salespeople enter the product info on proforma invoices manually).
The best way to spot inconsistent representations of the same elements in your database is to visualize them.
Plot bar charts per product category.
Do a count of rows by category if this is easier.
When you spot the inconsistency, standardize all elements into the same format.
Humans might understand that ‘apples’ is the same as ‘Apples’ (capitalization) which is the same as ‘appels’ (misspelling), but computers think those three refer to three different things altogether.
Lowercasing as default and correcting typos are your friends here.
Invalid Data
Duplicate Data
Data Type Issues
Structural Errors
The majority of data cleaning is running reusable scripts, which perform the same sequence of actions. For example: 1) lowercase all strings, 2) remove whitespace, 3) break down strings into words.
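As a rough illustration, such a script might look like this in pandas (the column name and helper function are made up for the example):

import pandas as pd

def clean_text_column(df, col):
    # 1) lowercase all strings
    df[col] = df[col].str.lower()
    # 2) remove surrounding whitespace
    df[col] = df[col].str.strip()
    # 3) break down strings into words
    df[col + "_words"] = df[col].str.split()
    return df

df = pd.DataFrame({"product": [" Apples ", "appLes", "Bananas"]})
df = clean_text_column(df, "product")
print(df)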
Problem discovery. Use any visualization tools that allow you to quickly visualize missing values and different data distributions.
Identify the problematic data
Clean the data
Remove, encode, fill in any missing data
Remove outliers or analyze them separately
Purge contaminated data and correct leaking pipelines
Standardize inconsistent data
Check if your data makes sense (is valid)
Deduplicate multiple records of the same data
Foresee and prevent type issues (string issues, DateTime issues)
Remove engineering errors (aka structural errors)
Rinse and repeat
HANDLING MISSING VALUES
The first thing I do when I get a new dataset is take a look at some of it. This lets me see that it all read in correctly and get an idea of what's going on with the data. In this case, I'm looking to see if I see any missing values, which will be represented with NaN or None.
nfl_data.sample(5)
Ok, now we know that we do have some missing values. Let's see how many we have in each column.
# get the number of missing data points per column
missing_values_count = nfl_data.isnull().sum()
# look at the # of missing points in the first ten columns
missing_values_count[0:10]
That seems like a lot! It might be helpful to see what percentage of the values in our dataset were missing to give us a better sense of the scale of this problem:
# how many total missing values do we have?
total_cells = np.prod(nfl_data.shape)
total_missing = missing_values_count.sum()
# percent of data that is missing
(total_missing/total_cells) * 100
Wow, almost a quarter of the cells in this dataset are empty! In the next step, we're going to take a closer look at some of the columns with missing values and try to figure out what might be going on with them.
One of the most important questions you can ask yourself to help figure this out is this:
Is this value missing because it wasn't recorded or because it doesn't exist?
If a value is missing because it doesn't exist (like the height of the oldest child of someone who doesn't have any children) then it doesn't make sense to try and guess what it might be.
These values you probably do want to keep as NaN. On the other hand, if a value is missing because it wasn't recorded, then you can try to guess what it might have been based on the other values in that column and row.
# if relevant
# replace all NA's with 0
subset_nfl_data.fillna(0)
# replace all NA's with the value that comes directly after it in the same column,
# then replace all the remaining NA's with 0
subset_nfl_data.fillna(method = 'bfill', axis=0).fillna(0)
# The default behavior fills in the mean value for imputation.
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer()
data_with_imputed_values = my_imputer.fit_transform(original_data)
----------
import pandas as pd
import numpy as np
# for Box-Cox Transformation
from scipy import stats
# for min_max scaling
from mlxtend.preprocessing import minmax_scaling
# plotting modules
import seaborn as sns
import matplotlib.pyplot as plt
# return a dataframe showing the number of NaNs and their percentage
total = df.isnull().sum().sort_values(ascending=False)
percent = (df.isnull().sum() / df.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
# replace NaNs with 0
df.fillna(0, inplace=True)
# replace NaNs with the column mean
df['column_name'].fillna(df['column_name'].mean(), inplace=True)
# replace NaNs with the column median
df['column_name'].fillna(df['column_name'].median(), inplace=True)
# linear interpolation to replace NaNs
df['column_name'].interpolate(method='linear', inplace=True)
# replace with the next value
df['column_name'].fillna(method='backfill', inplace=True)
# replace with the previous value
df['column_name'].fillna(method='ffill', inplace=True)
# drop rows containing NaNs
df.dropna(axis=0, inplace=True)
# drop columns containing NaNs
df.dropna(axis=1, inplace=True)
# replace NaNs depending on whether it's a numerical feature (k-NN) or categorical (most frequent category)
from sklearn.impute import SimpleImputer, KNNImputer

missing_cols = df.isna().sum()[lambda x: x > 0]
for col in missing_cols.index:
    if df[col].dtype in ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']:
        imputer = KNNImputer(n_neighbors=5)
        # alternatives:
        # imputer = SimpleImputer(strategy='mean')  # or 'median', 'most_frequent', or 'constant'
        # imputer = SimpleImputer(strategy='constant', fill_value=0)  # replace with 0
        df[col] = imputer.fit_transform(df[col].values.reshape(-1, 1))
        # if test set
        # df_test[col] = imputer.fit_transform(df_test[col].values.reshape(-1, 1))
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])
        # if test set
        # df_test[col] = df_test[col].fillna(df_test[col].mode().iloc[0])
PARSING DATES
See https://strftime.org/ for the format codes. Some examples:
1/17/07 has the format "%m/%d/%y"
17-1-2007 has the format "%d-%m-%Y"
# create a new column, date_parsed, with the parsed dates
landslides['date_parsed'] = pd.to_datetime(landslides['date'], format = "%m/%d/%y")
One of the biggest dangers in parsing dates is mixing up the months and days. The to_datetime() function does have very helpful error messages, but it doesn't hurt to double-check that the days of the month we've extracted make sense
# remove na's
day_of_month_landslides = day_of_month_landslides.dropna()
# plot the day of the month
sns.distplot(day_of_month_landslides, kde=False, bins=31)
READING FILES WITH ENCODING PROBLEMS
# try to read in a file not in UTF-8
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv")
# look at the first ten thousand bytes to guess the character encoding
import chardet

with open("../input/kickstarter-projects/ks-projects-201801.csv", 'rb') as rawdata:
    result = chardet.detect(rawdata.read(10000))

# check what the character encoding might be
print(result)
So chardet is 73% confident that the right encoding is "Windows-1252". Let's see if that's correct:
# read in the file with the encoding detected by chardet
kickstarter_2016 = pd.read_csv("../input/kickstarter-projects/ks-projects-201612.csv", encoding='Windows-1252')
# look at the first few lines
kickstarter_2016.head()
INCONSISTENT DATA
# get all the unique values in the 'City' column
cities = suicide_attacks['City'].unique()
# sort them alphabetically and then take a closer look
cities.sort()
cities
Just looking at this, I can see some problems due to inconsistent data entry: 'Lahore' and 'Lahore ', for example, or 'Lakki Marwat' and 'Lakki marwat'.
# convert to lower case
suicide_attacks['City'] = suicide_attacks['City'].str.lower()
# remove trailing white spaces
suicide_attacks['City'] = suicide_attacks['City'].str.strip()
It does look like there are some remaining inconsistencies: 'd. i khan' and 'd.i khan' should probably be the same.
I'm going to use the fuzzywuzzy package to help identify which strings are closest to each other.

# get the top 10 closest matches to "d.i khan"
import fuzzywuzzy
from fuzzywuzzy import process, fuzz

matches = fuzzywuzzy.process.extract("d.i khan", cities, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)

# take a look at them
matches
We can see that two of the items in the cities are very close to "d.i khan": "d. i khan" and "d.i khan". We can also see that "d.g khan", which is a separate city, has a ratio of 88. Since we don't want to replace "d.g khan" with "d.i khan", let's replace all rows in our City column that have a ratio of > 90 with "d. i khan".
# function to replace rows in the provided column of the provided dataframe
# that match the provided string above the provided ratio with the provided string
def replace_matches_in_column(df, column, string_to_match, min_ratio=90):
    # get a list of unique strings
    strings = df[column].unique()

    # get the top 10 closest matches to our input string
    matches = fuzzywuzzy.process.extract(string_to_match, strings,
                                         limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)

    # only get matches with a ratio > 90
    close_matches = [match[0] for match in matches if match[1] >= min_ratio]

    # get the rows of all the close matches in our dataframe
    rows_with_matches = df[column].isin(close_matches)

    # replace all rows with close matches with the input matches
    df.loc[rows_with_matches, column] = string_to_match

    # let us know the function's done
    print("All done!")
# use the function we just wrote to replace close matches to "d.i khan" with "d.i khan"
replace_matches_in_column(df=suicide_attacks, column='City', string_to_match="d.i khan")
REMOVING A CHARACTER THAT WE DON'T WANT
df['GDP'] = df['GDP'].str.replace('$', '', regex=False)
TO CONVERT STR TO NUMERICAL
#df['GDP'] = df['GDP'].astype(float)
# If stray characters get in the way of the conversion
df['GDP'] = df['GDP'].str.replace(',', '').astype(float)
TO ENCODE CATEGORICAL VARIABLES
# For variables taking more than 2 values
from sklearn.preprocessing import OrdinalEncoder
ordinal_encoder = OrdinalEncoder()
df['Country'] = ordinal_encoder.fit_transform(df[['Country']])
# To define the encoding ourselves
custom_categories = [['High School', 'Bachelor', 'Master', 'Ph.D']]  # encoded as 0, 1, 2, 3 in this order
ordinal_encoder = OrdinalEncoder(categories=custom_categories)
It seems like GitHub only detects the specific licenses if they are on the default (nowadays, main) branch.
Here's what I've observed just now
on my dev branch, the tabs next to README all read "License"
on my main branch the tabs next to README read their full license names (e.g. "MIT License")
on any branch, the licenses listed in the About section (top right) depended on the main branch licenses.
If dev had licenses but main didn't, the About section would not list any licenses, but the tabs next to README would read "License". (By "tabs" I mean the license entries shown next to the README.)
Should be enough to edit your .Renviron file and add
http_proxy=http://proxy.foo.bar:8080/
http_proxy_user=user_name:password